
    Caught in the act of an insider attack: detection and assessment of insider threat

    The greatest asset that any organisation has is its people, but they may also be its greatest threat. Those within the organisation may have authorised access to vast amounts of sensitive company records that are essential for maintaining competitiveness and market position, and knowledge of information services and procedures that are crucial for daily operations. In many cases, those who have such access do indeed require it in order to carry out their expected workload. However, should an individual choose to act against the organisation, then with their privileged access and extensive knowledge they are well positioned to cause serious damage. Insider threat is a serious and growing concern for many organisations, with victims of such attacks suffering significant financial and reputational damage. There is therefore a pressing need for more effective tools for detecting the presence of insider threats and analysing their potential before they escalate. We propose Corporate Insider Threat Detection (CITD), an anomaly detection system that is the result of a multi-disciplinary research project and that incorporates technical and behavioural activities to assess the threat posed by individuals. The system identifies user and role-based profiles, and measures how users deviate from their observed behaviours to assess the potential threat that a series of activities may pose. In this paper, we present an overview of the system and describe the concept of operations and the practicalities of deploying it. We show how the system can be used for unsupervised detection, and also how a human analyst can engage with it through an active learning feedback loop. By adopting an accept-or-reject scheme, the analyst can refine the underlying detection model to better support their decision-making process and significantly reduce the false-positive rate.
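
    To make the deviation-based scoring described above more concrete, the following is a minimal Python sketch of one way such a scheme could work: each user's daily activity counts are compared against their own history and against peers in the same role, and analyst accept/reject feedback adjusts the alerting threshold. The feature names, the z-score measure and the threshold update rule are illustrative assumptions, not the authors' CITD implementation.

        # Sketch only: user/role deviation scoring with an accept/reject feedback loop.
        from collections import defaultdict
        from statistics import mean, pstdev

        class DeviationScorer:
            def __init__(self, alert_threshold=3.0):
                self.user_history = defaultdict(list)   # user -> list of daily feature dicts
                self.role_history = defaultdict(list)   # role -> list of daily feature dicts
                self.alert_threshold = alert_threshold  # alert when deviation exceeds this

            def observe(self, user, role, features):
                """Record one day of activity counts, e.g. {'logins': 3, 'files_copied': 12}."""
                self.user_history[user].append(features)
                self.role_history[role].append(features)

            @staticmethod
            def _zscores(features, history):
                scores = []
                for name, value in features.items():
                    past = [day.get(name, 0) for day in history]
                    if len(past) < 2:
                        continue
                    sigma = pstdev(past) or 1.0
                    scores.append(abs(value - mean(past)) / sigma)
                return scores

            def score(self, user, role, features):
                """Higher score = larger deviation from the user's own and the role's norms."""
                z = self._zscores(features, self.user_history[user]) + \
                    self._zscores(features, self.role_history[role])
                return max(z) if z else 0.0

            def is_alert(self, user, role, features):
                return self.score(user, role, features) >= self.alert_threshold

            def feedback(self, accepted):
                """Accept/reject loop: rejected (false-positive) alerts raise the threshold."""
                self.alert_threshold *= 0.95 if accepted else 1.05

    In practice, a day's activity would be scored before being added to the histories, and the analyst's accept or reject decision on each alert would be fed back to tune the threshold.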

    A Storm in an IoT Cup: The Emergence of Cyber-Physical Social Machines

    The concept of social machines is increasingly being used to characterise various socio-cognitive spaces on the Web. Social machines are human collectives using networked digital technology that initiate real-world processes and activities, including human communication, interaction and knowledge creation. As such, they continuously emerge and fade on the Web. The relationship between humans and machines is made more complex by the adoption of Internet of Things (IoT) sensors and devices. The scale, automation, continuous sensing and actuation capabilities of these devices add an extra dimension to the relationship between humans and machines, making it difficult to understand their evolution at either the systemic or the conceptual level. This article describes these new socio-technical systems, which we term Cyber-Physical Social Machines, through different exemplars, and considers the associated challenges of security and privacy.

    Automated insider threat detection system using user and role-based profile assessment

    Organizations are experiencing an ever-growing concern over how to identify and defend against insider threats. Those who have authorized access to sensitive organizational data are placed in a position of power that could well be abused and could cause significant damage to an organization. This could range from financial theft and intellectual property theft to the destruction of property and business reputation. Traditional intrusion detection systems are neither designed for nor capable of identifying those who act maliciously within an organization. In this paper, we describe an automated system that is capable of detecting insider threats within an organization. We define a tree-structure profiling approach that incorporates the details of activities conducted by each user and each job role, and then use this to obtain a consistent representation of features that provide a rich description of the user's behavior. Deviation can be assessed based on the amount of variance that each user exhibits across multiple attributes, compared against their peers. We have performed experimentation using ten synthetic data-driven scenarios and found that the system can identify anomalous behavior that may be indicative of a potential threat. We also show how our detection system can be combined with visual analytics tools to support further investigation by an analyst.
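
    As a rough illustration of the tree-structure profiling idea, the sketch below flattens a nested activity profile into a consistent set of attributes and scores a user's deviation from role peers on each attribute. The attribute names and the simple per-attribute comparison are assumptions for illustration, not the paper's exact method.

        # Sketch only: flatten a tree-structured daily profile and compare against peers.
        from statistics import mean, pstdev

        def flatten(profile, prefix=""):
            """Flatten a nested activity tree, e.g. {'email': {'sent': 40}} -> {'email/sent': 40}."""
            flat = {}
            for key, value in profile.items():
                path = f"{prefix}{key}"
                if isinstance(value, dict):
                    flat.update(flatten(value, path + "/"))
                else:
                    flat[path] = value
            return flat

        def peer_deviation(user_profile, peer_profiles):
            """Score how far a user's flattened profile sits from peers, per attribute."""
            user_flat = flatten(user_profile)
            peers_flat = [flatten(p) for p in peer_profiles]
            deviations = {}
            for attr, value in user_flat.items():
                peer_values = [p.get(attr, 0) for p in peers_flat]
                if len(peer_values) < 2:
                    continue
                sigma = pstdev(peer_values) or 1.0
                deviations[attr] = abs(value - mean(peer_values)) / sigma
            return deviations

        # Example: a user whose USB file copies far exceed those of peers in the same role.
        peers = [{"logon": {"count": 2}, "device": {"usb_files": 1}} for _ in range(10)]
        user = {"logon": {"count": 3}, "device": {"usb_files": 40}}
        print(max(peer_deviation(user, peers).items(), key=lambda kv: kv[1]))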

    A picture tells a thousand words: what Facebook and Twitter images convey about our personality

    Engineering and Physical Sciences Research Council (EPSRC), EP/J004995/1: An Exploration of Superidentity.

    Data Presentation in Security Operations Centres: Exploring the Potential for Sonification to Enhance Existing Practice

    Security practitioners working in Security Operations Centres (SOCs) are responsible for detecting and mitigating malicious computer-network activity. This work requires both automated tools that detect and prevent attacks, and data-presentation tools that can present pertinent network-security monitoring information to practitioners in an efficient and comprehensible manner. In recent years, advances have been made in the development of visual approaches to data presentation, with some uptake of advanced security visualization tools in SOCs. Sonification, in which data is represented as sound, is said to have potential as an approach that could work alongside existing visual data-presentation approaches to address some of the unique challenges faced by SOCs. For example, sonification has been shown to enable peripheral monitoring of processes, which could aid practitioners multitasking in busy SOCs. The perspectives of security practitioners on incorporating sonification into their actual working environments have not yet been examined, however. The aim of this paper, therefore, is to address this gap by exploring attitudes to using sonification in SOCs, and identifying the data-presentation approaches currently used. We report on the results of a study consisting of an online survey (N=20) and interviews (N=21) with security practitioners working in a range of different SOCs. Our contributions are (1) a refined appreciation of the contexts in which sonification could aid in SOC working practice, (2) an understanding of the areas in which sonification may not be beneficial or may even be problematic, (3) an analysis of the critical requirements for the design of sonification systems and their integration into the SOC setting, and (4) evidence of the visual data-presentation techniques currently used and identification of how sonification might work alongside and address challenges to using them. Our findings provide insight into the potential benefits and challenges of introducing sonification to support work in this vital security-monitoring environment. Participants saw potential value in using sonification systems to aid in anomaly-detection tasks in SOCs (such as retrospective hunting), as well as in situations in which peripheral monitoring is desirable: while handling multiple work tasks, or while outside of the SOC.
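
    As a back-of-the-envelope illustration of the sonification idea, the sketch below maps security events to pitches, with higher-severity events producing higher notes so that a stream of events could be monitored peripherally. The event fields and the pentatonic mapping are assumptions for illustration, not a system from the study, and real audio output would require an audio or MIDI backend.

        # Sketch only: map event severity to a pitch on a pentatonic scale.
        PENTATONIC_OFFSETS = [0, 2, 4, 7, 9]  # semitone offsets of a major pentatonic scale

        def event_to_frequency(severity, base_midi=60):
            """Map an event's severity (0-4) to a frequency in Hz; higher severity, higher note."""
            midi_note = base_midi + PENTATONIC_OFFSETS[min(max(severity, 0), 4)]
            return 440.0 * 2 ** ((midi_note - 69) / 12)

        for event in [{"source": "firewall", "severity": 1}, {"source": "ids", "severity": 4}]:
            print(event["source"], round(event_to_frequency(event["severity"]), 1), "Hz")

    Restricting pitches to a pentatonic scale is a common way to keep overlapping notes from clashing when many events sound at once.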

    Anomaly detection using pattern-of-life visual metaphors

    Complex dependencies exist across the technology estate, users and purposes of machines. This can make it difficult to efficiently detect attacks. Visualization to date is mainly used to communicate patterns of raw logs, or to visualize the output of detection systems. In this paper we explore a novel approach to presenting cybersecurity-related information to analysts. Specifically, we investigate the feasibility of using visualizations to enable analysts to act as anomaly detectors using Pattern-of-Life Visual Metaphors. Unlike glyph metaphors, the visualizations themselves (rather than any single visual variable on screen) transform complex systems into simpler ones using different mapping strategies. We postulate that such mapping strategies can yield new, meaningful ways of showing anomalies in a manner that can be easily identified by analysts. We present a classification system to describe machine and human activities on a host machine, and a strategy to map machine dependencies and activities to a metaphor. We then present two examples, each with three attack scenarios, running data generated from attacks that affect the confidentiality, integrity and availability of machines. Finally, we present three in-depth use-case studies to assess the feasibility (i.e. can this general approach be used to detect anomalies in systems?), usability and detection abilities of our approach. Our findings suggest that our general approach is easy to use to detect anomalies in complex systems, but that the type of metaphor has an impact on users' ability to detect anomalies. As with other anomaly-detection techniques, false positives do exist in our general approach as well. Future work will need to investigate optimal mapping strategies and other metaphors, and examine how our approach compares to and can complement existing techniques.
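
    To illustrate the general idea of mapping host activity onto a simpler pattern-of-life view, the toy sketch below renders one character per hour for each activity category, so that off-hours activity stands out at a glance. The categories, thresholds and text rendering are assumptions for illustration only; they are not the paper's visual metaphors.

        # Sketch only: summarise per-hour activity as a strip where off-hours bursts stand out.
        from collections import Counter

        def hourly_pattern(events):
            """events: list of (hour, category) tuples; returns a 24-character strip per category."""
            by_category = {}
            for hour, category in events:
                by_category.setdefault(category, Counter())[hour] += 1
            strips = {}
            for category, counts in by_category.items():
                strips[category] = "".join(
                    "#" if counts[h] > 5 else ("." if counts[h] else " ") for h in range(24)
                )
            return strips

        # Normal interactive use during office hours, plus a burst of file access at 03:00.
        events = [(h, "keyboard") for h in range(9, 18) for _ in range(8)]
        events += [(3, "file_access")] * 12
        for category, strip in hourly_pattern(events).items():
            print(f"{category:>12} |{strip}|")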

    The perfect storm: The privacy paradox and the Internet-of-Things

    Privacy is a concept found throughout human history, and opinion polls suggest that the public value this principle. However, while many individuals claim to care about privacy, they are often perceived to behave to the contrary. This phenomenon is known as the Privacy Paradox, and its existence has been validated through numerous psychological, economic and computer science studies. Several contributory factors have been suggested, including user interface design, risk saliency, social norms and default configurations. We posit that the further proliferation of the Internet-of-Things (IoT) will aggravate many of these factors, posing even greater risks to individuals’ privacy. This paper explores the evolution of both the paradox and the IoT, discusses how privacy risk might alter over the coming years, and suggests the further research required to strike a reasonable balance. We believe both technological and socio-technical measures are necessary to ensure privacy is protected in a world of ubiquitous data collection.

    Privacy is the boring bit: User perceptions and behaviour in the Internet-of-Things

    In opinion polls, the public frequently claim to value their privacy. However, individuals often seem to overlook the principle, contributing to a disparity labelled the 'Privacy Paradox'. The growth of the Internet-of-Things (IoT) is frequently claimed to place privacy at risk. However, the Paradox remains underexplored in the IoT. In addressing this, we first conduct an online survey (N = 170) to compare public opinions of IoT and less-novel devices. Although we find users perceive privacy risks, many still decide to purchase smart devices. With the IoT rated as less usable and less familiar, we assert that this constrains protective behaviour. To explore this hypothesis, we perform contextualised interviews (N = 40) with the general public. In these dialogues, owners discuss their opinions and actions with a personal device. We find the Paradox is significantly more prevalent in the IoT, frequently justified by a lack of awareness. We finish by highlighting the qualitative comments of users and suggesting practical solutions to their issues. This is the first work, to our knowledge, to evaluate the Privacy Paradox over a broad range of technologies.

    Privacy Salience: Taxonomies and Research Opportunities

    Privacy is a well-understood concept in the physical world, with us all desiring some escape from the public gaze. However, while individuals might recognise locking doors as protecting privacy, they have difficulty practising equivalent actions online. Privacy salience considers the tangibility of this important principle, one which is often obscured in digital environments. Through extensively surveying a range of studies, we construct the first taxonomies of privacy salience. After coding articles and identifying commonalities, we categorise works by their methodologies, platforms and underlying themes. While web browsing appears to be frequently analysed, the Internet-of-Things has received little attention. Through our use of category tuples and frequency matrices, we then explore those research opportunities which might have been overlooked. These include studies of targeted advertising and its effect on salience in social networks. It is through refining our understanding of this important topic that we can better highlight the subject of privacy.

    (Smart)Watch Out! Encouraging Privacy-Protective Behavior through Interactive Games

    The public frequently appear to overlook privacy, even when they claim to value it. This disparity between concern and behavior is known as the Privacy Paradox. Smartwatches are novel products that offer helpful functionality. However, although they often store sensitive data (e.g. text messages), owners rarely use protective features (e.g. app permissions). Campaigns have sought to increase privacy awareness, but such initiatives tend to be ineffective. We therefore explore the efficacy of a serious game in encouraging protective smartwatch behavior. The application is designed with Learning Science principles and evaluated through a study with 504 smartwatch owners. After soliciting concerns and behavior, our treatment group [n = 252] play the online simulation. Our control group [n = 252] do not participate, as we seek to limit extraneous variables. In a follow-up session, all users report post-test responses and qualitative justifications. We appear to encourage protective behavior, with our treatment group using privacy features more often. We also significantly reduce the prevalence of the Paradox, realigning behavior with concern. These quantitative findings are complemented by an inductive analysis of user rationale. Smartwatch behavior is influenced by several factors, including privacy awareness and data sensitivity. Finally, we use Protection Motivation Theory (PMT) to develop intervention recommendations. These include risk exposure tools and protective demonstrations. To our knowledge, this is the first tool to encourage protective smartwatch behavior.